15 research outputs found

    The relation between the shock-induced free-surface velocity and the postshock specific volume of solids

    The release of solids from a state of shock compression at a free surface is examined. For isentropic release, the postshock specific volume V'_0 is shown to be constrained by V'_0 ≥ (U_fs − U_p)^2/P_1 + V_1, where (P_1, V_1) is the pressure-volume Hugoniot state of shock compression and U_fs and U_p are the free-surface and shock particle velocities, respectively. When a sudden phase change occurs during the release process, this lower bound is increased, subject to simplifying assumptions about the phase transition.
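    A minimal sketch of how a bound of this form follows, assuming isentropic release from the Hugoniot state (P_1, V_1) down to zero pressure and using the notation of the abstract:

```latex
% Sketch only (assumption: release follows the isentrope S from (P_1,V_1) to P = 0).
\begin{align*}
  U_{fs} - U_p &= \int_0^{P_1} \left(-\frac{dV}{dP}\right)_{\!S}^{1/2} dP
  && \text{(Riemann integral along the release isentrope)} \\
  (U_{fs} - U_p)^2 &\le \int_0^{P_1}\! dP \;\int_0^{P_1}\!\left(-\frac{dV}{dP}\right)_{\!S} dP
   = P_1\,(V'_0 - V_1)
  && \text{(Cauchy--Schwarz)} \\
  \Rightarrow\quad V'_0 &\ge \frac{(U_{fs}-U_p)^2}{P_1} + V_1 .
\end{align*}
```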

    Shock temperature measurements in Mg_2SiO_4 and SiO_2 at high pressures

    Temperatures in the high pressure shock state have been determined by measurement of optical radiation from pure samples of forsterite (Mg_2SiO_4), α-quartz, and fused silica. Shock waves of known amplitude were produced by tantalum flyer impact using a two-stage light gas gun. Shock pressures in the ranges 150-175 GPa and 70-115 GPa for Mg_2SiO_4 and SiO_2, respectively, were achieved, and temperatures in the range 4500-6800 K were measured. The observed temperatures in Mg_2SiO_4 are consistent with the occurrence of a shock-induced phase transition with a transition energy of ∼1.5 MJ/kg. Measured Hugoniot temperatures versus pressure in both fused and crystalline SiO_2 shocked to the stishovite regime suggest the occurrence of a previously unknown transition, beginning at pressures of approximately 107 GPa and 70 GPa for α-quartz and fused quartz, respectively. The energies and temperatures appear to be consistent with the onset of melting of stishovite under shock loading.
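    As a rough, hedged illustration of why a transition energy of this size matters for the measured temperatures: energy absorbed by a transition behind the shock front depresses the Hugoniot temperature by roughly E_tr/C_v relative to the untransformed extrapolation. The specific heat below is an assumed representative value, not a number from the abstract:

```latex
% Back-of-the-envelope only; C_v ~ 1.3 kJ kg^{-1} K^{-1} is an assumed value.
\[
  \Delta T \;\approx\; \frac{E_{tr}}{C_v}
  \;\approx\; \frac{1.5\ \mathrm{MJ\,kg^{-1}}}{1.3\ \mathrm{kJ\,kg^{-1}\,K^{-1}}}
  \;\approx\; 1.2\times 10^{3}\ \mathrm{K}.
\]
```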

    One-dimensional isentropic compression

    The generation of nearly isentropic pressure‐density states in a molecular fluid sample (e.g., H_2O) is examined by a series of one‐dimensional finite difference calculations. We employ a series of buffer materials of increasing shock impedance (Lexan, Al, Fe, W) behind the sample and impact it with a composite flyer plate of the same series of materials. In the case of H_2O impacted at 2.5 km/sec, three‐fold nearly isentropic compression to a pressure of 70 GPa is achieved in 10 μsec with a 3 cm thick composite impactor.
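    A hedged sketch of the planar impedance-match construction that underlies this kind of staged loading: each material is assigned a linear U_s-u_p Hugoniot, and the interface state is the intersection of the target Hugoniot with the flyer Hugoniot reflected about the impact velocity. The parameters below are approximate literature-style values chosen only for illustration, and the example treats a single tungsten flyer striking an H_2O target, not the paper's composite-flyer, multi-layer reverberation calculation.

```python
# Minimal impedance-match sketch (single planar impact), not the paper's
# finite-difference reverberation calculation. Hugoniot parameters are
# illustrative approximations: rho0 [g/cm^3], c0 [km/s], s [-].
from scipy.optimize import brentq

MATERIALS = {
    "H2O": (1.00, 1.65, 1.92),   # approximate values, illustration only
    "W":   (19.2, 4.03, 1.24),
}

def hugoniot_pressure(rho0, c0, s, u):
    """P = rho0 * Us * up with Us = c0 + s*up; these units give P in GPa."""
    return rho0 * (c0 + s * u) * u

def impedance_match(target, flyer, v_impact):
    """Particle velocity and pressure at the impact interface."""
    rt = MATERIALS[target]
    rf = MATERIALS[flyer]
    # Target Hugoniot minus the flyer Hugoniot reflected about v_impact.
    f = lambda u: hugoniot_pressure(*rt, u) - hugoniot_pressure(*rf, v_impact - u)
    u = brentq(f, 1e-6, v_impact - 1e-6)
    return u, hugoniot_pressure(*rt, u)

if __name__ == "__main__":
    u, p = impedance_match("H2O", "W", 2.5)   # 2.5 km/s impact, as in the abstract
    print(f"first-shock state in H2O: up ~ {u:.2f} km/s, P ~ {p:.1f} GPa")
```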

    Shock temperatures of SiO_2 and their geophysical implications

    The temperature of SiO_2 in high-pressure shock states has been measured for samples of single-crystal α-quartz and fused quartz. Pressures between 60 and 140 GPa have been studied using projectile impact and optical pyrometry techniques at Lawrence Livermore National Laboratory. Both data sets indicate the occurrence of a shock-induced phase transformation at ∼70 and ∼50 GPa along the α- and fused quartz Hugoniots, respectively. The suggested identification of this transformation is the melting of shock-synthesized stishovite, with the onset of melting delayed by metastable superheating of the crystalline phase. Some evidence for this transition in conventional shock wave equation of state data is given, and when these data are combined with the shock temperature data, it is possible to construct the stishovite-liquid phase boundaries. The melting temperature of stishovite near 70 GPa pressure is found to be 4500 K, and melting in this vicinity is accompanied by a relative volume change and latent heat of fusion of ∼2.7% and ∼2.4 MJ/kg, respectively. The solid stishovite Hugoniot centered on α-quartz is well described by the linear shock velocity-particle velocity relation, u_s = 1.822 u_p + 1.370 km/s, while at pressures above the melting transition, the Hugoniot centered on α-quartz has been fit with u_s = 1.619 u_p + 2.049 km/s up to a pressure of ∼200 GPa. The melting temperature of stishovite near 100 GPa suggests an approximate limit of 3500 K for the melting temperature of SiO_2-bearing solid mantle mineral assemblages, all of which are believed to contain Si^(4+) in octahedral coordination with O^(2−). Thus 3500 K is proposed as an approximate upper limit to the melting point and the actual temperature in the earth's mantle. Moreover, the increase of the melting point of stishovite with pressure at 70 GPa is inferred to be ∼11 K/GPa. Using various adiabatic temperature gradients in the earth's mantle and assuming creep is diffusion controlled in the lower mantle, the current results could preclude an increase of viscosity by more than a factor of 10^3 with depth across the mantle.
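    The quoted linear fits can be converted to pressure-density Hugoniot states through the Rankine-Hugoniot jump conditions. A minimal sketch, assuming an initial α-quartz density of 2.65 g/cm^3 (a standard handbook value, not stated in the abstract):

```python
# Convert the abstract's linear Us-up fits (km/s) to P-rho Hugoniot states via
# the Rankine-Hugoniot relations P = rho0*Us*up and rho = rho0*Us/(Us - up).
# rho0 for alpha-quartz is an assumed handbook value (2.65 g/cm^3).
RHO0 = 2.65  # g/cm^3; with km/s velocities, pressures come out in GPa

FITS = {
    "solid (stishovite regime)": (1.822, 1.370),   # Us = 1.822*up + 1.370
    "above melting":             (1.619, 2.049),   # Us = 1.619*up + 2.049
}

def hugoniot_state(up, slope, intercept, rho0=RHO0):
    us = slope * up + intercept
    pressure = rho0 * us * up              # GPa
    density = rho0 * us / (us - up)        # g/cm^3
    return us, pressure, density

if __name__ == "__main__":
    for label, (slope, intercept) in FITS.items():
        for up in (3.0, 4.0, 5.0):         # particle velocities in km/s
            us, p, rho = hugoniot_state(up, slope, intercept)
            print(f"{label}: up={up:.1f} km/s -> Us={us:.2f} km/s, "
                  f"P={p:.0f} GPa, rho={rho:.2f} g/cm^3")
```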

    Simplex GPS and InSAR Inversion Software

    Changes in the shape of the Earth's surface can be routinely measured with precisions better than centimeters. Processes below the surface often drive these changes, and as a result investigators require models with inversion methods to characterize the sources. Simplex inverts any combination of GPS (global positioning system), UAVSAR (uninhabited aerial vehicle synthetic aperture radar), and InSAR (interferometric synthetic aperture radar) data simultaneously for elastic response from fault and fluid motions. It can be used to solve for multiple faults and parameters, all of which can be specified or allowed to vary. The software can be used to study long-term tectonic motions and the faults responsible for those motions, or can be used to invert for co-seismic slip from earthquakes. Solutions involving estimation of fault motion and changes in fluid reservoirs such as magma or water are possible. Any arbitrary number of faults or parameters can be considered. Simplex specifically solves for any of location, geometry, fault slip, and expansion/contraction of a single fault or multiple faults. It inverts GPS and InSAR data for elastic dislocations in a half-space. Slip parameters include strike slip, dip slip, and tensile dislocations. It includes a map interface for both setting up the models and viewing the results. Results, including faults and observed, computed, and residual displacements, are output in text format and a map interface, and can be exported to KML. The software interfaces with the QuakeTables database, allowing a user to select existing fault parameters or data. Simplex can be accessed through the QuakeSim portal graphical user interface or run from a UNIX command line.
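    A hedged sketch of the general class of inversion described here (not the Simplex source code): fit a simple deformation source to surface displacement data with a downhill-simplex (Nelder-Mead) search. A Mogi point source stands in for the fault and fluid sources the real software supports, and all names and numbers below are illustrative.

```python
# Illustrative only: Nelder-Mead inversion of synthetic vertical displacements
# for a Mogi point source (depth, volume change). This is NOT the Simplex
# software's implementation; it sketches the same class of problem.
import numpy as np
from scipy.optimize import minimize

NU = 0.25  # assumed Poisson ratio

def mogi_uz(r, depth, dvol, nu=NU):
    """One common form of the Mogi point-source vertical displacement."""
    return (1.0 - nu) * dvol * depth / (np.pi * (r**2 + depth**2) ** 1.5)

# Synthetic "observations": stations 0.5-20 km from the source, true source
# at 5 km depth with 1e6 m^3 volume change, plus noise.
rng = np.random.default_rng(0)
r_obs = np.linspace(500.0, 20_000.0, 40)          # m
uz_obs = mogi_uz(r_obs, 5_000.0, 1e6) + rng.normal(0.0, 2e-4, r_obs.size)

def misfit(params):
    depth, dvol = params
    if depth <= 0:
        return np.inf
    return np.sum((mogi_uz(r_obs, depth, dvol) - uz_obs) ** 2)

result = minimize(misfit, x0=[3_000.0, 5e5], method="Nelder-Mead")
print("estimated depth [m], dV [m^3]:", result.x)
```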

    Modeling of the surface static displacements and fault plane slip for the 1979 Imperial Valley earthquake

    Synthesis of geodetic and seismological results for the 1979 Imperial Valley earthquake is approached using three-dimensional finite element modeling techniques. The displacements and stresses are calculated elastically throughout the modeled region. The vertical elastic structure in the model is derived from compressional and shear wave velocities as used in the seismic data analysis (Fuis et al., 1981) combined with a sediment density profile. Two strategies for applying initial conditions are followed in this modeling. In the first strategy, a sample seismological estimate for fault plane slip is used to predict the resultant surface motions. We show that the geodetic strain results over distances of tens of kilometers from the fault (Snay et al., 1982) are basically consistent with the model seismic fault displacements. Geodetic results from within a few kilometers of the fault trace (Mason et al., 1981) seem to require more slip at shallow depths than appears at seismic time scales. This is consistent with the occurrence of aftercreep at shallow depths in less well-consolidated material, which would bring surface displacements into line with maximum slip at depth but not greatly affect the net moment. In the second strategy, we consider stresses on the fault plane, rather than displacements, as model variables. To constrain this part of our numerical modeling, we assume that the fault driving stress is governed by ambient tectonic stress and an opposing Coulomb friction derived from experiment. The coseismic stress drop from point to point on the failed fault is given by the difference between the tectonic shear stress and the frictional stress. After arriving at such a uniform model, which adequately represents the Snay et al. results, we further modify a small region near the seismic “asperity” to make the fault plane motions qualitatively and quantitatively resemble the model of coseismic motions used in the first strategy. The observed offset on the fault trace (Sharp et al., 1982) is approximated in this final stress-driven model by removing the driving stress on the southern third of the fault. Thus, the principal features of the coseismic slip pattern are explained by a stress-driven fault model in which: (a) a spatially unresolved asperity is found equivalent to a stress drop of 18 MPa averaged over an area of 15 km^2, and (b) driving stress is essentially absent on the fault segment overlapping the 1940 earthquake rupture zone.
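    As a hedged back-of-the-envelope check on what an 18 MPa stress drop over 15 km^2 implies, one can treat the asperity as a circular (Eshelby) crack with an assumed crustal rigidity; neither the crack model nor the rigidity value comes from the paper.

```python
# Rough scaling only: average slip and moment implied by the abstract's
# asperity (18 MPa over 15 km^2), using the Eshelby circular-crack relation
# delta_sigma = (7*pi/16) * mu * slip / a. mu is an assumed rigidity.
import math

MU = 30e9            # Pa, assumed crustal rigidity (not from the paper)
DSIGMA = 18e6        # Pa, asperity stress drop from the abstract
AREA = 15e6          # m^2, asperity area from the abstract

a = math.sqrt(AREA / math.pi)                 # equivalent crack radius
slip = 16.0 * DSIGMA * a / (7.0 * math.pi * MU)
moment = MU * slip * AREA                     # seismic moment of the asperity alone
mw = (2.0 / 3.0) * (math.log10(moment) - 9.1)

print(f"radius ~ {a/1e3:.1f} km, slip ~ {slip:.2f} m, "
      f"M0 ~ {moment:.1e} N*m (Mw ~ {mw:.1f})")
```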

    Hypercube matrix computation task

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
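    A hedged sketch of the communication pattern that makes a hypercube attractive for such matrix-oriented codes: with 2^d nodes whose labels differ from their neighbors' in exactly one bit, a global reduction (for example, summing per-node partial results of a matrix-vector product) completes in d exchange steps. The code below only simulates that pattern; it is not taken from the JPL implementations.

```python
# Simulated hypercube all-reduce (recursive doubling): each of the 2**d nodes
# holds a partial value; after d pairwise exchanges along one cube dimension
# at a time, every node holds the global sum. Illustration only.
def hypercube_allreduce(values):
    n = len(values)
    d = n.bit_length() - 1
    assert n == 1 << d, "node count must be a power of two"
    vals = list(values)
    for k in range(d):                      # one step per cube dimension
        nxt = vals[:]
        for node in range(n):
            partner = node ^ (1 << k)       # neighbor differs in bit k
            nxt[node] = vals[node] + vals[partner]
        vals = nxt
    return vals

if __name__ == "__main__":
    partial = [float(i) for i in range(8)]  # e.g., per-node partial dot products
    print(hypercube_allreduce(partial))     # every entry equals sum(range(8)) = 28.0
```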

    QuakeSim 2.0

    QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system and are accessible by users or various model applications. UAVSAR repeat-pass interferometry data products are added to the QuakeTables database and are available through a browsable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models. Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.
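    A hedged illustration of the kind of data-quality screening described for GPS time series (detecting station anomalies such as unreported offsets); the windowed-median test below is a generic technique, not QuakeSim's algorithm.

```python
# Generic offset screen for a daily GPS position time series: flag epochs where
# the median of the following window jumps relative to the preceding window.
# Illustration only; not QuakeSim's implementation.
import numpy as np

def flag_offsets(series, window=30, threshold=0.005):
    """Return indices where |median(after) - median(before)| exceeds threshold (meters)."""
    flags = []
    for i in range(window, len(series) - window):
        before = np.median(series[i - window:i])
        after = np.median(series[i:i + window])
        if abs(after - before) > threshold:
            flags.append(i)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    east = rng.normal(0.0, 0.002, 365)   # synthetic one-year series, 2 mm noise
    east[200:] += 0.02                   # inject a 2 cm offset at day 200
    print("suspect epochs near:", flag_offsets(east)[:5])
```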

    Finite Element Solution of Thermal Convection On A Hypercube Concurrent Computer

    Numerical solutions to thermal convection flow problems are vital to many scientific and engineering problems. One fundamental geophysical problem is the thermal convection responsible for continental drift and sea floor spreading. The earth's interior undergoes slow creeping flow (~cm/yr) in response to the buoyancy forces generated by temperature variations caused by the decay of radioactive elements and secular cooling. Convection in the earth's mantle, the 3000 km thick solid layer between the crust and core, is difficult to model for three reasons: (1) complex rheology -- the effective viscosity depends exponentially on temperature, on pressure (or depth), and on the deviatoric stress; (2) the buoyancy forces driving the flow occur in boundary layers thin in comparison to the total depth; and (3) spherical geometry -- the flow in the interior is fully three dimensional. Because of these many difficulties, accurate and realistic simulations of this process easily overwhelm current computer speed and memory (including the Cray XMP and Cray 2), and only simplified problems have been attempted [e.g., Christensen and Yuen, 1984; Gurnis, 1988; Jarvis and Peltier, 1982]. As a start in overcoming these difficulties, a number of finite element formulations have been explored on hypercube concurrent computers. Although two coupled equations are required to solve this problem (the momentum or Stokes equation and the energy or advection-diffusion equation), we concentrate in this paper on the solution of the latter equation; solution of the former is discussed elsewhere [Lyzenga et al., 1988]. We demonstrate that linear speedups and efficiencies of 99 percent are achieved for sufficiently large problems.
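    For the energy (advection-diffusion) equation discussed here, a minimal serial sketch of explicit time stepping is given below; it uses a simple upwind finite-difference discretization rather than the paper's finite element formulation, and all parameter values are illustrative.

```python
# 1-D advection-diffusion, dT/dt + u dT/dx = kappa d2T/dx2, stepped explicitly
# with upwind advection and central diffusion. A serial illustration of the
# energy-equation solve discussed in the abstract, not the paper's FE scheme.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
u, kappa = 1.0, 1e-3                   # illustrative velocity and diffusivity
dt = 0.4 * min(dx / abs(u), dx**2 / (2 * kappa))   # respect CFL/diffusion limits

x = np.linspace(0.0, L, nx)
T = np.exp(-((x - 0.2) / 0.05) ** 2)   # initial temperature pulse

for _ in range(500):
    adv = -u * (T - np.roll(T, 1)) / dx                          # upwind (u > 0), periodic
    dif = kappa * (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T = T + dt * (adv + dif)

print(f"dt = {dt:.2e}, peak T after 500 steps: {T.max():.3f} "
      f"at x = {x[T.argmax()]:.2f}")
```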